Considering zero as a number like any other, and looking at both the problems zero caused and the benefits that came from it

The Problems

As the Indians began treating zero as a number, its bizarre properties began to reveal themselves. What happens when you add one number to another number? It becomes a different number; you could say it changes and becomes larger. Add zero to any other number, however, and it doesn't change; it stays the same. The same idea holds true for subtraction. Now, observe that by adding a number to itself enough times, it will eventually exceed any other number. This is not true of zero. It gets more bizarre when you consider the properties of multiplication. All at once, multiply every number on the number line by 2. What happens? The number line stretches. The number 1 becomes 2; the number 5 becomes 10. But multiply every number on the number line by zero at once, and the entire number line collapses to zero.
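For a concrete picture of the multiplication behavior, here is a small Python sketch (my own illustration, with a short list standing in for the number line):

    # A sample of points on the number line
    line = [-2, -1, 0, 1, 2, 5]

    doubled = [2 * x for x in line]   # [-4, -2, 0, 2, 4, 10]: the line stretches
    zeroed = [0 * x for x in line]    # [0, 0, 0, 0, 0, 0]: every point collapses to zero

    print(doubled)
    print(zeroed)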

Things get far worse when you consider division. If a number n is multiplied by 2, we have 2n; how would you undo the multiplication to get n back? You would divide by 2, thus 2n/2 = n. Moving forward, we know that any number n times zero equals zero. 0*3 = 0, so how would you undo the multiplication to get the number 3 back? Well, there is no reason yet to say we can't divide by zero. This would give us (0*3)/0 = 3, but we could also say that (0*2)/0 = 2, (0*4)/0 = 4, and (0*6)/0 = 6. This can't be true, since 0*2, 0*3, 0*4, and 0*6 all equal 0, so it would imply that 0/0 equals 2, 3, 4, and 6 all at once. Since a fraction is just a representation of one specific number, this feels more than a little contradictory. What if we were to take this a little further and consider (1/0)*0? Should this equal 1, where the zeros cancel each other out, or should it equal zero, since anything times zero is zero? To say the least, accepting zero as a number like any other forces you to deal with a lot of problems. (The ideas and principles illustrated in this paragraph are based on the last section of chapter 1 of Charles Seife's book.) Seife also gives a very pragmatic proof that shows what happens when you divide by zero just once. Below is my interpretation and depiction of his proof:
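A minimal sketch of such a proof, assuming the familiar form of the argument that starts from two equal quantities and hides a single division by zero:

Let $a = b$.
Multiply both sides by $a$: $a^2 = ab$.
Subtract $b^2$ from both sides: $a^2 - b^2 = ab - b^2$.
Factor both sides: $(a+b)(a-b) = b(a-b)$.
Divide both sides by $(a-b)$: $a + b = b$.
Since $a = b$, this says $2b = b$, and so $2 = 1$.

The only illegitimate step is the division by $a - b$, which equals zero because $a = b$; a single division by zero is enough to collapse the distinction between any two numbers.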

An Activity worksheet for the problems zero causes in mathematics

Applet to learn about Zeno's Paradox

Back to Top

Why you can't divide by zero
A current debate over whether zero is even or odd

Algebra

So if zero caused so many problems in mathematics, why was it worth keeping?

To explain why it was worth it, you must look at the next step in history, where algebra took form. The term algebra comes from a book called Al-jabr, written by al-Khowarizmi, a Muslim scholar in Baghdad who lived around 780-850 A.D., during the time of the Islamic conquest (Seife, p. 72). The Islamic conquest covered a large part of the world, including India, where zero was already being recognized as a number. As the Islamic nation spread, its scholars learned from the people they conquered. This is how al-Khowarizmi learned what the Indians knew about the number zero. He took that knowledge and expanded on it to consider variables. He refined the process, developing rules and methods for solving equations, eventually writing his book, Al-jabr. His book has greatly impacted our numeric system, including the numerals, notation, and rules of algebra we now use. Fortunately for us, this included zero, both the number and the symbol. (Burton, 2006)

Imagine doing algebra without being able to consider any negative numbers: not just avoiding negative solutions, but never being able to consider negative numbers at all in your work. After trying a few examples, it is easy to see the power zero gave algebra, for zero is like a door to the negative numbers. With zero comes its counterpart, infinity, which the Indians had already encountered, but with algebra the world was really forced to face it. Struggling to know what to do with infinity eventually led to the idea of limits. The ability to understand limits was a breakthrough that laid the foundations of a branch of mathematics called calculus, a powerful tool that led to the innovations of the modern world.

Though I cannot stress enough that limits provide the foundation for calculus to stand on, I must first discuss calculus (which recognized infinity), as it was the first to be invented.

Back to Top

Calculus

Newton-Leibniz calculus unquestionably contributed to the creation of the modern world we live in today and deserves a great amount of recognition. But since it bears two names, who did what, and who deserves more credit? A huge controversy from the beginning, the question still leaves people arguing today. In essence, calculus was developed simultaneously by both Newton and Leibniz, or at least there is some evidence supporting that.

Newton (1642-1727 A.D.) actually discovered calculus, in the undeveloped form of fluxions, during the same two years of leisure at Woolsthorpe (1664-1666 A.D.) in which he discovered the refractive properties of white light and universal gravitation. The development of fluxions follows from Newton's discovery of the binomial theorem and its use in producing infinite series. Newton's discovery of the binomial theorem stemmed from Wallis's tables of calculated infinite series, which today would be written as $\int_0^1 (1-x^2)^n\,dx$ for select integer values of n. From there, Newton considered the results of changing the upper bound to a variable. Every step along the way, Newton was discovering calculus. Through mostly observation, Newton discovered the binomial theorem, although he never actually proved it. (Burton, pp. 391-396)
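For reference, the general binomial expansion Newton arrived at, stated here in modern notation (my addition, not Newton's own symbolism), is

$(1+x)^{r} = 1 + rx + \frac{r(r-1)}{2!}x^{2} + \frac{r(r-1)(r-2)}{3!}x^{3} + \cdots$,

valid for any rational exponent $r$ when $|x| < 1$. Taking $r = \tfrac{1}{2}$ and replacing $x$ with $-x^{2}$, for instance, gives $(1-x^{2})^{1/2} = 1 - \tfrac{1}{2}x^{2} - \tfrac{1}{8}x^{4} - \cdots$, exactly the kind of infinite series Newton could work with term by term.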

In an attempt to find a pattern in the infinite series given by his binomial expansions, Newton was able to find the area under the rectangular hyperbola $y = \frac{1}{1+x}$ to be the series expansion of the natural logarithm. Unfortunately, Nicholas Mercator had already discovered the reduction of $\frac{1}{1+x}$ to an infinite series. In an effort to secure priority for his work, Newton showed it to Isaac Barrow, who realized the greatness of the work in its ability to solve for the area of the hyperbola, but in a very general way. This, in essence, is the summary of calculus: the ability to solve for areas, slopes, and much more in a general way, and Newton was realizing the integration part of it. He set out these results in De Analysi per Aequationes Numero Terminorum Infinitas (1669 A.D.). In this work, without proof, he stated a rule for computing the area under the curve $y = ax^{m/n}$, which according to the rule is $z = \frac{n}{m+n}\,ax^{(m+n)/n}$. To elaborate, Newton assumed the area $z = \frac{n}{m+n}\,ax^{(m+n)/n}$. Using "o" as an infinitesimal, Newton recovered the original curve by considering an infinitesimal increase in x and in z, applying the binomial expansion, subtracting the first given equation, dividing both sides of the equation by "o," and finally removing any terms left containing "o." This is where Newton first began to recognize the derivative process in his work. However, Newton still had no rigorous proof for the rules he used, especially for the properties of the infinitesimal "o." (Burton, pp. 417-420)
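A concrete instance of the procedure just described, using the simple curve $y = x^{2}$ with assumed area $z = \tfrac{1}{3}x^{3}$ (my own choice of example): increase $x$ by $o$ and $z$ by the thin strip $oy$, so that

$z + oy = \tfrac{1}{3}(x+o)^{3} = \tfrac{1}{3}x^{3} + x^{2}o + xo^{2} + \tfrac{1}{3}o^{3}$.

Subtracting $z = \tfrac{1}{3}x^{3}$ leaves $oy = x^{2}o + xo^{2} + \tfrac{1}{3}o^{3}$; dividing by $o$ gives $y = x^{2} + xo + \tfrac{1}{3}o^{2}$; and removing the terms still containing $o$ recovers $y = x^{2}$, the curve we started from.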

Newton first considers "o" (an infinitesimal) as not zero, in order to divide by it. Then Newton treats it as zero in order to cancel out the remaining, unneeded terms left after the binomial expansion, obviously violating the laws of mathematics. Later, in an attempt to broaden his method to other curves, Newton conceived of quantities differently and replaced infinitesimals with fluxions and fluents. The variable x could be a fluent, and its rate of change would be called the fluxion of the fluent, designated $\dot{x}$. Today this could be considered as $\frac{dx}{dt}$, and the infinitely small increase of x over a very small period of time would be considered the moment of the fluent, denoted $\dot{x}o$. This method involved substituting 'the fluent plus the moment of the fluent' for the fluent in an equation. However different this method appears, "o" is still assumed to be non-zero in order to divide the equation, and then assumed to be infinitely little so that Newton could reject the terms containing it. The method is much the same as with infinitesimals, but now Newton considered the change in the variables with respect to time. Still fraught with the confusion this "zero-like" property was presenting, Newton attempted to remedy the illegitimate step by changing again to ultimate ratios. In this he attempted to explain how "o" was used both as zero and not as zero by claiming that what mattered was not that the quantities "vanished," but the rate at which they vanished. Newton might have been close to recognizing limits, but he was never able to explain his use of ultimate ratios with rigor. (Burton, pp. 417-420)
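The fluxional substitution described above, applied to $y = x^{2}$ (again my own illustrative example), runs as follows:

$y + \dot{y}o = (x + \dot{x}o)^{2} = x^{2} + 2x\dot{x}o + \dot{x}^{2}o^{2}$.

Subtracting $y = x^{2}$ and dividing by $o$, treated for the moment as non-zero, gives $\dot{y} = 2x\dot{x} + \dot{x}^{2}o$; rejecting the term that still contains $o$, now treated as nothing, leaves the ratio of fluxions $\dot{y}/\dot{x} = 2x$, which today we would write as $dy/dx = 2x$.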

Gottfried Leibniz (1646-1716 A.D.), at about the same time as Newton, was also discovering calculus. Like Newton's, his discovery seemed to stem from dabbling in infinite series. Well before his discovery, Leibniz had asserted that a "calculus of reasoning" could be developed that would allow the solutions of problems to be figured automatically. Without going into the debate over whether or not Leibniz came up with calculus on his own, between 1672 and 1676 A.D. he developed the component features and notation of his calculus. The notation is what is most attributed to Leibniz, as there is no controversy over that matter; its application is considered very thorough and user friendly. In his calculus he also advanced the methods of solving for tangents, and in doing so he devised a method for taking the properties of the tangents of a curve and solving for the equation of the curve itself, a method of integration. His method of integration was first developed with bars as parentheses and used the word "omn." However, "omn" was quickly replaced by the familiar ∫, meaning sum. Before long, his notation shifted again to the familiar $\int y\,dx$. (Burton, pp. 414-417)
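To make the notational shift concrete, a sketch of the commonly described evolution: Leibniz's symbol for 'the sum of all the $y$' went roughly from omn. $y$ to $\int y$ to $\int y\,dx$, with $\int$ an elongated S for summa and $dx$ marking the infinitely small width of each strip being summed.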

Leibniz correctly determined the product rule, but, like Newton, he discarded the term $dx\,dy$ with the justification that it was infinitely small compared to the rest. However, it does not appear to me that in his method the equation was ever divided by zero. Still, he showed no rigorous proof or reasoning for why a term can simply disappear. Using this product rule, Leibniz came up with the quotient rule and then the power rule. Even amid the dispute, Newton recognized that Leibniz's method freed calculus from geometry, in addition to standardizing and generalizing the results, even though Newton argued that the results were nothing new (Burton, p. 417). It is very important to note that, like Newton, Leibniz also never recognized the concept of limits and its importance in calculus.
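Leibniz's discarded term is easy to display in modern differentials (a sketch, not his exact notation): for a product $xy$, let $x$ grow by $dx$ and $y$ by $dy$; then

$d(xy) = (x + dx)(y + dy) - xy = x\,dy + y\,dx + dx\,dy$,

and Leibniz simply dropped the last term $dx\,dy$ as infinitely small compared with $x\,dy$ and $y\,dx$, leaving the familiar rule $d(xy) = x\,dy + y\,dx$. Notice that nothing here is divided by zero; a term is merely declared negligible without proof.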

Both Newton's and Leibniz's methods contained results without rigorous proof, never quite getting past zero. They both arrived at the same rules, and both came from a background of dealing with infinite series. The major differences are their notations and some of the steps in their methods. It is interesting to ponder why Newton never intended to publish his discovery of calculus, and whether Leibniz came to calculus of his own accord, but we can stand on the fact that Newton discovered calculus and Leibniz came up with a good method of notation and interpretation.

Back to Top

Limits

Limits finally gave merit to calculus; they provide a reason why calculus works. It wasn't until Jean d'Alembert (1717-1783 A.D.) suggested the theory of limits in an article entitled "Différentiel," in volume 4 of the Encyclopédie (1754), that calculus began to have ground to stand on (Burton, p. 417). However, because of his geometric reasoning, he did not have satisfactory backing for his ideas about limits. This is also when Zeno's paradox finally began to unravel, but I'll get into that later. Augustin-Louis Cauchy (1789-1857 A.D.) finally provided a non-geometric theory of limits that appeased most of the critics of limits (Burton, p. 604). Inspired by Lagrange's work on Taylor series expansions, Cauchy wrote the treatise Cours d'analyse de l'École Royale Polytechnique in 1821 A.D. In it he formulated a definition of the limit. His definition was:

When successive values attributed to a variable approach indefinitely to a fixed value so as to end by differing from it by as little as one wishes, this last is called the limit of all others.

I would like to go into this much further, but I will leave that to you. From this you will see that Cauchy removed the geometric ties from the theory of limits and went on to provide the key concepts of continuity, differentiability, and the definite integral, leading much of the world now to regard him as "the creator of calculus..." (Burton, 2006).
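In modern symbols (a rendering of Cauchy's words, not his own notation), the definition for a sequence of successive values reads: $\lim_{n\to\infty} x_{n} = L$ means that for every $\varepsilon > 0$ there is an $N$ such that $|x_{n} - L| < \varepsilon$ whenever $n > N$. The "as little as one wishes" becomes the arbitrary $\varepsilon$, and nothing in the statement requires the variable ever to reach its limit, which is exactly what lets calculus avoid dividing by zero.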

Back to Top


Download this website in paper form